cloud computer
Opera Neon browser launches with built-in AI and a monthly fee
Opera is resurrecting Opera Neon, a browser concept first introduced in 2017, and equipping it with the latest tech trend: agentic AI--an assistant you can assign tasks to that it then carries out autonomously. Opera Neon will work like a normal browser, but Opera is adding a local AI that you can chat with privately and ask to perform tasks, paired with a remote server that acts as a workspace for Opera Neon's AI creation tools. Most browsers are free; the twist here is that Opera Neon will require a paid subscription at a yet-undisclosed price, and prospective users must join a waitlist. Opera has a history of experimenting with innovative concepts--it was an early proponent of built-in VPNs, for example.
- Media > Music (0.32)
- Leisure & Entertainment (0.32)
FogROS2: An Adaptive Platform for Cloud and Fog Robotics Using ROS 2
Ichnowski, Jeffrey, Chen, Kaiyuan, Dharmarajan, Karthik, Adebola, Simeon, Danielczuk, Michael, Mayoral-Vilches, Víctor, Jha, Nikhil, Zhan, Hugo, LLontop, Edith, Xu, Derek, Buscaron, Camilo, Kubiatowicz, John, Stoica, Ion, Gonzalez, Joseph, Goldberg, Ken
Mobility, power, and price points often dictate that robots do not have sufficient computing power on board to run contemporary robot algorithms at desired rates. Cloud computing providers such as AWS, GCP, and Azure offer immense computing power and increasingly low latency on demand, but tapping into that power from a robot is non-trivial. We present FogROS2, an open-source platform to facilitate cloud and fog robotics that is included in the Robot Operating System 2 (ROS 2) distribution. FogROS2 is distinct from its predecessor FogROS1 in nine ways, including lower latency, overhead, and startup times; improved usability; and additional automation, such as region and compute-type selection. Additionally, FogROS2 gains performance, timing, and other improvements associated with ROS 2. In common robot applications, FogROS2 reduces SLAM latency by 50%, reduces grasp planning time from 14 s to 1.2 s, and speeds up motion planning 45x. Compared to FogROS1, FogROS2 reduces network utilization by up to 3.8x, improves startup time by 63%, and reduces network round-trip latency by 97% for images using video compression. The source code, examples, and documentation for FogROS2 are available at https://github.com/BerkeleyAutomation/FogROS2, and the package is available through the official ROS 2 repository at https://index.ros.org/p/fogros2/.
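The abstract's core tradeoff--offloading pays only when the cloud's faster compute outweighs the added network round trip--can be sketched in a few lines. This is an illustrative model, not FogROS2's actual API; the 0.05 s round-trip figure below is a hypothetical assumption, while the 14 s and 1.2 s grasp-planning times come from the abstract.

```python
def offload_speedup(t_local_s, t_cloud_s, t_network_s):
    """End-to-end speedup from offloading one request to the cloud.

    Offloading helps whenever t_local_s > t_cloud_s + t_network_s,
    i.e. whenever this ratio exceeds 1.0.
    """
    return t_local_s / (t_cloud_s + t_network_s)

# Grasp planning: 14 s on-robot vs 1.2 s in the cloud (per the
# abstract), plus an assumed 0.05 s network round trip.
speedup = offload_speedup(14.0, 1.2, 0.05)
print(f"{speedup:.1f}x")  # 11.2x end-to-end
```

The ratio also shows why FogROS2's reduced round-trip latency matters: for fast cloud computations, `t_network_s` dominates the denominator, so shaving latency directly raises the achievable speedup.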
- North America > United States > California > Alameda County > Berkeley (0.14)
- South America > Ecuador (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (3 more...)
- Research Report (0.50)
- Overview (0.46)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (0.93)
Nvidia and Microsoft to build massive AI cloud computer - Sleuth Technical
Nvidia announced a collaboration with Microsoft to build a "massive" cloud computer focused on AI. Their plan is to use tens of thousands of high-end Nvidia GPUs for applications like deep learning and large language models. The companies aim to make it one of the most powerful AI supercomputers in the world. Meanwhile, Microsoft will contribute its Azure cloud infrastructure and ND- and NC-series virtual machines.
Nvidia and Microsoft team up to build massive AI cloud computer
On Wednesday, Nvidia announced a collaboration with Microsoft to build a "massive" cloud computer focused on AI. It will reportedly use tens of thousands of high-end Nvidia GPUs for applications like deep learning and large language models. The companies aim to make it one of the most powerful AI supercomputers in the world. The new supercomputer will feature thousands of units of the Hopper H100, arguably the most powerful GPU in the world, which Nvidia launched in October. Nvidia will also provide its second most powerful GPU, the A100, and utilize its Quantum-2 InfiniBand networking platform, which can transfer data at 400 gigabits per second between servers, linking them together into a powerful cluster.
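The 400 Gb/s link rate quoted above can be put in perspective with quick arithmetic. The 80 GB payload below is a hypothetical example (roughly the HBM capacity of one H100), and the sketch ignores protocol overhead:

```python
# Back-of-envelope: time to move a payload over one 400 Gb/s
# Quantum-2 InfiniBand link, ignoring protocol overhead.
LINK_GBPS = 400  # gigabits per second, from the article

def transfer_time_s(payload_gigabytes):
    """Seconds to transfer a payload at the full link rate
    (multiply by 8 to convert gigabytes to gigabits)."""
    return payload_gigabytes * 8 / LINK_GBPS

# Hypothetical 80 GB payload, about one H100's worth of memory.
print(f"{transfer_time_s(80):.1f} s")  # 1.6 s
```

At that rate, shuttling entire model states between servers takes seconds, which is what makes linking thousands of GPUs into one training cluster practical.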